Search Results: "vivi"

2 December 2016

Shirish Agarwal: Air Congestion and Politics

Confession time first: I am not a frequent flyer at all. My first flight was in late 2006, a two-hour flight from Bombay (BOM) to Bengaluru (formerly Bangalore, BLR). I still remember the trepidation, the nervousness and the excitement the first time I took to the air. I remember the flight very vividly. It was a typical humid day for Bombay/Mumbai and we (a friend and I) had gone to Sahar (the domestic airport) to take the flight in the evening. Before we started, the sky had turned golden-orange and I was wondering how I would feel once I was in the air. We took off at around 20:00 hours and, as it was a clear night, we were able to see the Queen's Necklace (Marine Drive) in all her glory. The photographs on the Wikipedia page don't really do justice to how beautiful the whole boulevard looks at night, especially from up there. While we were looking, it seemed the pilot had banked at a 45-degree angle so we could have the best view of the necklace, OR maybe the pilot wanted to take a photo, OR it was just me being in overdrive (like Robin Williams, the Russian immigrant in Moscow on the Hudson, the first time he goes to the mall ;)). Either way, it is an experience I will never forget for the rest of my life. I remember I didn't move an inch (not even to go to the loo) as I didn't want to let go of the whole experience. I came back after 3-4 days, but I still remember re-experiencing/re-imagining the flights for a whole month each time I went to sleep. While I can't say it has become routine, I have been lucky to have the opportunity to fly domestic around the country, primarily for work. After the initial romanticism wears off, you try to understand the various aspects of the flight happening around you. These experiences are what led me to file/share today's blog post.

Yesterday, Ms. Mamata Banerjee, one of the leaders of the Opposition, cried wolf because her aircraft was circling the airport. Because she is the Chief Minister she feels she should have got precedence, or at least that seems to be the way the story unfolded on TV. I have flown about 15-20 times in the last decade, for work or leisure. On almost all the flights I have taken, it has been routine for the aircraft to circle the airport for 15-20 minutes before landing. This is routine. I have seen aircraft being stacked (remember the scene from Die Hard 2 where Holly McClane, John McClane's wife, looks at different aircraft at different altitudes from her window seat); this is what an airport has to do when it doesn't have enough runways. In fact, just a few days back I read that MIAL is going for an emergency expansion as they weren't expecting as many passengers as they got this year and last. On the same day there was a near-miss between two aircraft at Mumbai airport itself; because of Ms. Mamata's belligerence, that story didn't even get a mention in the mainstream TV media. The point I wanna underscore is that this is a fact of life, and not just in India; the world over, hubs seem to be busier than ever. Heathrow, for instance, has also been a busy bee and will have to rework air operations as per a recent article. In India, Kolkata is also one of the busier airports. If anything, I hope it teaches her the issues that plague most Indian airports and that she works with the Government at the Centre so the airport can expand more. They only got a new terminal three years back.
It is for these issues that the Indian Government has come up with the Regional Connectivity Scheme. Lastly, a bit of welcome news for people thinking of visiting India: the Government of the day is facilitating easier visa norms to increase tourism and trade to India. I hope this is beneficial to all, including any Debian Developers who wanna come visit India. I do hope that we also get reciprocity from those countries as well.
Filed under: Miscellaneous Tagged: #Domestic Flights, #Air Congestion, #Airport Expansion, #Kolkata, #near-miss, #Visa for tourists

29 August 2016

Russell Coker: Monitoring of Monitoring

I was recently asked to get data from a computer that controlled security cameras after a crime had been committed. Due to the potential issues I refused to collect the computer and insisted on performing the work at the office of the company in question. Hard drives are vulnerable to damage from vibration and there is always a risk involved in moving hard drives or systems containing them. A hard drive with evidence of a crime provides additional potential complications. So I wanted to stay within view of the man who commissioned the work just so there could be no misunderstanding.

The system had a single IDE disk. The fact that it had an IDE disk is an indication of the age of the system. One of the benefits of SATA over IDE is that swapping disks is much easier: SATA is designed for hot-swap, and even systems that don't support hot-swap will have less risk of mechanical damage when changing disks if SATA is used instead of IDE. For an appliance type system where a disk might be expected to be changed by someone who's not a sysadmin, SATA provides more benefits over IDE than for some other use cases.

I connected the IDE disk to a USB-IDE device so I could read it from my laptop. But the disk just made repeated buzzing sounds while failing to spin up. This is an indication that the drive was probably experiencing stiction, which is where the heads stick to the platters and the drive motor isn't strong enough to pull them off. In some cases hitting a drive will get it working again, but I'm certainly not going to hit a drive that might be subject to legal action! I recommended referring the drive to a data recovery company. The probability of getting useful data from the disk in question seems very low. It could be that the drive had stiction for months or years. If the drive is recovered it might turn out to have data from years ago and not the recent data that is desired. It is possible that the drive only got stiction after being turned off, but I'll probably never know.

Doing it Properly

Ever since RAID was introduced there has never been an excuse for having a single disk on its own with important data. Linux Software RAID didn't support online rebuild when 10G was a large disk, but since the late 90s it has worked well and there's no reason not to use it. The probability of a single IDE disk surviving long enough on its own to capture useful security data is not particularly good. Even with 2 disks in a RAID-1 configuration there is a chance of data loss. Many years ago I ran a server at my parents' house with 2 disks in a RAID-1 and both disks had errors one hot summer. I wrote a program that's like ddrescue but which would read from the second disk if the first gave a read error, and ended up not losing any important data AFAIK (a rough sketch of that idea appears at the end of this post). BTRFS has some potential benefits for recovering from such situations, but I don't recommend deploying BTRFS in embedded systems any time soon.

Monitoring is a requirement for reliable operation. For desktop systems you can get by without specific monitoring, but that is because you are effectively relying on the user monitoring it themselves. Since I started using mon (which is very easy to set up) I've had it notify me of some problems with my laptop that I wouldn't have otherwise noticed. I think that ideally for desktop systems you should have monitoring of disk space, temperature, and certain critical daemons that need to be running but which the user wouldn't immediately notice if they crashed (such as cron and syslogd).
There are some companies that provide 3G SIMs for embedded/IoT applications with rates that are significantly cheaper than any of the usual phone/tablet plans if you use small amounts of data or SMS. For a reliable CCTV system the best thing to do would be to have a monitoring contract and have the monitoring system trigger an event if there's a problem with the hard drive etc., and also if the system fails to send an "I'm OK" message for a certain period of time. I don't know if people are selling CCTV systems without monitoring to compete on price or if companies are cancelling monitoring contracts to save money. But whichever is happening, it's significantly reducing the value derived from monitoring.
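As an illustration of the fallback-read idea mentioned above, here is a rough sketch of the general approach; it is not the author's actual program, and the device names and block size are placeholders:

#!/bin/sh
# Sketch only: copy /dev/sdX to recovered.img block by block, retrying each
# block from /dev/sdY (the other RAID-1 member) when the first disk errors.
# Ignores any partial block at the end of the device.
BS=65536
SIZE=$(blockdev --getsize64 /dev/sdX)
BLOCKS=$((SIZE / BS))
i=0
while [ "$i" -lt "$BLOCKS" ]; do
    dd if=/dev/sdX of=recovered.img bs=$BS skip=$i seek=$i count=1 conv=notrunc 2>/dev/null \
        || dd if=/dev/sdY of=recovered.img bs=$BS skip=$i seek=$i count=1 conv=notrunc
    i=$((i + 1))
done

For a real recovery, GNU ddrescue with its map file is the better tool; the loop above only illustrates the fallback logic described in the post.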

25 August 2016

Francois Marier: Debugging gnome-session problems on Ubuntu 14.04

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager. The solution I found was to install this missing package:
apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs

The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:
DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors
This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):
[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME
I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:
Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM
Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....
It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging

In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:
cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop
and then adding --debug to the command line inside gnome-debug.desktop:
[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME debug
After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to login again. This time, I had a lot more information in ~/.xsession-errors:
gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
...
/usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127
which suggests that gnome-shell won't start because of a missing library.
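A quick way to double-check a failure like this (not a step from the original post, just a common trick) is to ask the dynamic linker which libraries it cannot resolve:

# any line ending in "not found" is a shared library the loader cannot resolve
ldd /usr/bin/gnome-shell | grep "not found"

Exit code 127, as seen above, is the conventional status for a program that could not be started at all, which fits a missing shared library.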

Finding the missing library

To find the missing library, I used the apt-file command:
apt-file update
apt-file search libwayland-egl.so.1
and found that this file is provided by the following packages:
  • libhybris
  • libwayland-egl1-mesa
  • libwayland-egl1-mesa-dbg
  • libwayland-egl1-mesa-lts-utopic
  • libwayland-egl1-mesa-lts-vivid
  • libwayland-egl1-mesa-lts-wily
  • libwayland-egl1-mesa-lts-xenial
Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial. I filed a bug for this on Launchpad.

6 August 2016

Mirco Bauer: Ethereum GPU Mining on Linux How-To

TL;DR

Install/use Debian 8 or Ubuntu 16.04, then execute:
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ethereum/ethereum
sudo sed 's/jessie/vivid/' -i /etc/apt/sources.list.d/ethereum-ethereum-*.list
sudo apt-get update
sudo apt-get install ethereum ethminer
geth account new
# copy the long character sequence shown in quotes, that is your <YOUR_WALLET_ADDRESS>
# if you lose the passphrase, you lose your coins!
sudo apt-get install linux-headers-amd64 build-essential
chmod +x NVIDIA-Linux-x86_64-367.35.run
sudo ./NVIDIA-Linux-x86_64-367.35.run
ethminer -G -F http://yolo.ethclassic.faith:9999/0x<YOUR_WALLET_ADDRESS> --farm-recheck 200
echo done
My Attention Span is > 60 seconds

Ethereum is a cryptocurrency similar to Bitcoin, as it is based on blockchain technology. Ethereum is not yet another Bitcoin clone, though, since it has an additional feature called Smart Contracts that makes it unique and very promising. I am not going into details of how Ethereum works; you can get that in great detail on the Internet. This post is about Ethereum mining. Mining is how crypto coins are created. You need to spend computing time to get coins out. At the beginning CPU mining was sufficient, but as the Ethereum network difficulty has increased you need to use GPUs, as they can calculate at a much higher hashrate than a general purpose CPU can. About 2 months ago I bought a new gaming rig with an Nvidia GTX 1070, so I can experience virtual-reality gaming with an HTC Vive at a great framerate. As it turns out, modern graphics cards are very good at hashing, so I gave it a spin. Initially I did this mining setup with Windows 10, as that is the operating system on my gaming rig. If you want to do Ethereum mining using your GPU, then you really want to use Linux. On Windows the GTX 1070 produced a hashrate of 6 MH/s (megahashes per second) while the same hardware does 25 MH/s on Linux. The hashrate roughly quadrupled by using Linux instead of Windows. Sounds good? Keep reading and follow this guide. You have to pick a Linux distro to use for mining. As I am a Debian developer, all my systems run Debian, which is what I am also using for this guide. The same procedure can be done for Ubuntu as it is similar enough. For other distros you have to adapt the steps yourself. So I assume you already have Debian 8 or Ubuntu 16.04 installed on your system.

Install Ethereum Software

First we need the geth tool, which is the main Ethereum "client". Ethereum is really a peer-to-peer network, which means each node is a server and client at the same time. A node that contains the complete blockchain history in a database is called a full node. For this guide you don't need to run a full node, as mining pools do this for you. We still need geth to create the private key of your Ethereum wallet. Somewhere we have to receive the coins we are mining ;) Add the Ethereum APT repository using these commands:
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ethereum/ethereum
sudo apt-get update
On Debian 8 (on Ubuntu you can skip this) you need to replace the repository name with this command:
sudo sed 's/jessie/vivid/' -i /etc/apt/sources.list.d/ethereum-ethereum-*.list
sudo apt-get update
Install ethereum, ethminer and geth:
sudo apt-get install ethereum ethminer geth

Create Ethereum Wallet

A wallet is where coins are "stored". They are not really stored in the wallet, because the wallet is just a private key that nobody else has. The balance of that wallet is visible to everyone using the blockchain database. And this is what full nodes do: they contain and distribute the database to all other peers. So use this command to create your first private key for your wallet:
geth account new
Be aware that this passphrase protects the private key of your wallet. Anyone who has access to that file and knows your passphrase will have full control over your coins. And also do not forget the passphrase: if you do, you lose all your coins! The output of "geth account new" shows a long character/number sequence in quotes. This is your wallet address and you should write it down, because if someone wants to send you money, it goes to that address. We will use it for the mining pool later.
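As a side note (not in the original post), if you need to look the address up again later, geth can list the locally stored accounts, assuming the default data directory:

# prints the addresses of all keys in the local keystore
geth account list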

Install (proprietary) nvidia driver

For OpenCL to work with nvidia graphics cards, like my GTX 1070, you need to install the proprietary driver from nvidia. If you have an older card, maybe the open-source drivers will work for you. For the nvidia Pascal cards (numbered 10xx) you will need this driver package. After you have agreed to the terms, download the NVIDIA-Linux-x86_64-367.35.run file. But before we can use that installer we need to install some dependencies, as the installer will have to compile a Linux kernel module for you. Install the dependencies using this command:
sudo apt-get install linux-headers-amd64 build-essential
Now we can make the installer executable and run it like this:
chmod +x NVIDIA-Linux-x86_64-367.35.run
sudo ./NVIDIA-Linux-x86_64-367.35.run
If that step completed without error, then we should be able to run the mining benchmark!
ethminer -M -G
The -M means "run benchmark" and the -G is for GPU mining. The first time you run it, it will create a DAG file and that will take a while. For me it took about 12 minutes on my GTX 1070. After that it should show an inner mean hashrate. H/s is hashes per second, KH/s is kilohashes per second (1,000 H/s) and MH/s is megahashes per second (1,000 KH/s). I had numbers around 25-30 MH/s, but for real mining you will see an average that is a balanced number and not a min/max range.

Pick Ethereum Network

Now it gets serious: you need to decide 2 things. First, which Ethereum network you want to mine for, and second, which pool to use. Ethereum has 2 networks: one is called Ethereum One or Core, while the other is called Ethereum Classic. Ethereum made a hardfork to undo the consequences of a software bug in the DAO. The DAO is a smart contract for a decentralized organization. Because of that bug, a blackhat was able to obtain money from that DAO. The Ethereum developers made a poll and decided that the consequences would be undone. Not everyone agreed, so the old network stayed alive and is now called Ethereum Classic, short ETC. The hardfork kept the short name ETH. This is important to understand for mining, because the hashing difficulty differs hugely between ETH and ETC. As of writing, the hashrate of ETC is at 20% compared to ETH. Thus you need less computing time to get ETC coins and more time to get ETH coins. Put differently, ETC mining is currently more profitable.

Pick a Pool

Hmmmm, I want a swimming pool, thanks! Just kidding... You can mine without a pool, that is called solo mining, but you will get less reward. A mining pool is multiple computers that work on the same block to find a solution quicker than others. The pool has an aggregated hashrate that is higher than other solo miners. Each block found by anyone in this pool is rewarded to everyone in the pool. The reward of currently 5 ether per block gets split in the same ratio of hashrate each member provides (minus the pool fee). So while you get less for a found block, you have a steady, lower income rate instead of a higher reward with less chance of finding a block (in time). Simply said: you have to find a new block faster than the others to receive the reward. If you want to mine Ethereum Classic (ETC) use one of the pools listed here (at the bottom of the page). If you want to mine Ethereum One / Core (ETH) use one of the pools listed here.
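To make the split concrete, here is a rough back-of-the-envelope example; the 25 MH/s personal hashrate, 10,000 MH/s pool hashrate and 1% fee are made-up numbers, only the 5 ether block reward comes from the text:

# your share of one block: reward * (your hashrate / pool hashrate) * (1 - fee)
awk 'BEGIN { reward=5; mine=25; pool=10000; fee=0.01;
             printf "%.4f ether per block found by the pool\n",
                    reward * (mine/pool) * (1-fee) }'

With those numbers each block the pool finds earns you roughly 0.012 ether, traded against much lower variance than solo mining.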

Run ethminer

The instruction page of the pool website usually says how to start the miner program, but here is an example of the pool that I use (because pony!):
ethminer -G -F http://yolo.ethclassic.faith:9999/0x<YOUR_WALLET_ADDRESS> --farm-recheck 200

Profit

If this guide was helpful for you, you can tip me at ethereum:0x9ec1220d2f2fadd3f0c96e3007daa827bc83fbd6 or simply run ethminer using my wallet address for a day or two:
ethminer -G -F http://yolo.ethclassic.faith:9999/0x9ec1220d2f2fadd3f0c96e3007daa827bc83fbd6 --farm-recheck 200
Happy mining!

21 June 2016

Russ Allbery: Review: Furiously Happy

Review: Furiously Happy, by Jenny Lawson
Publisher: Flatiron
Copyright: September 2015
ISBN: 1-250-07700-1
Format: Hardcover
Pages: 329
Jenny Lawson, who blogs as The Bloggess, is best known for a combination of being extremely funny and being extremely open about her struggles with mental illness (a combination of anxiety and depression, alongside a few other problems). Her first book primarily told the story of her family, childhood, and husband. Furiously Happy is a more random and disconnected book, but insofar as it has a theme, it's about surviving depression, anxiety, and other manifestations of your brain being an asshole. I described Lawson's previous book, Let's Pretend This Never Happened, as the closest thing I've found to a stand-up comedy routine in book form. Furiously Happy is very similar, but it lacks the cohesiveness of a routine. Instead, it feels like a blog collection: a pile of essays with only some vague themes and not a lot of continuity from essay to essay. This doesn't surprise me. Second books are very different than first books, particularly second books by someone whose writing focus is not writing books, and particularly for non-fiction. I feel like Let's Pretend This Never Happened benefited from drawing on Lawson's full life experience to form the best book she could write. When that became wildly popular, everyone of course wanted a second book, including me. But when the writing is this personal, the second book is, out of necessity, partly leftovers. Lawson's recent experiences don't generate as much material as her whole life up to the point of the first book. That said, there is a bit of a theme, and the title fits it. Early in the book, Lawson describes how, after the death of a friend and a bout of depression, she decided to be furiously, vehemently happy to get back at the universe, to spite its attempts to destroy her mood. It's one of the best bits in this book. The surrounding philosophy is about embracing the moment, enjoying the hell out of everything that one can enjoy, and taking a lot of chances. A lot of the stories in this book come after the beginning of Lawson's fame and popularity. She has book tours, a vacation tour of Australia, and a community of people from her blog. That, of course, doesn't make the depression and anxiety any better; indeed, it provides a lot of material for her anxiety to work with. Lawson talks a lot about surviving, about how important that community is to her, about not believing your brain when it lies to you. This isn't as uniformly funny as her first book, and sometimes it feels a bit too much like an earnest pep talk. But there are also moments of insightful life philosophy mixed into the madcap adventures and stream-of-consciousness wild associations. Lawson also does for anxiety what Allie Brosh does for depression: make the severe form of it relatable to people who have not suffered from it. I was particularly struck by her description of flying: the people around her are getting nervous and anxious as the plane starts to take off, and she's finally able to relax because her anxiety focused on all the things she had to do in order to get onto the right plane at the right time. Once she didn't have to make any decisions or do anything other than sit in one place, her anxiety let go. I don't have any type of clinical anxiety, but I was able to identify with that moment of relief and its contrast with anxiety in a deeper way than with other descriptions. Furiously Happy is a bit more serious and earnest, and I'm not sure it worked as well. I liked Lawson's first book better; this felt more like a blog archive. 
But she's still funny, entertaining, and delightful, and I'm happy to support her with a book purchase. Start with either her blog or Let's Pretend This Never Happened if you're new to Lawson, but if you're already a fan, here's more of her writing. Rating: 7 out of 10

30 March 2016

Colin Watson: Re-signing PPAs

Julian has written about their efforts to strengthen security in APT, and shortly before that notified us that Launchpad's signatures on PPAs use weak SHA-1 digests. Unfortunately we hadn't noticed that before; GnuPG's defaults tend to result in weak digests unless carefully tweaked, which is a shame. I started on the necessary fixes for this immediately we heard of the problem, but it's taken a little while to get everything in place, and I thought I'd explain why, since some of the problems uncovered are interesting in their own right.

Firstly, there was the relatively trivial matter of using SHA-512 digests on new signatures. This was mostly a matter of adjusting our configuration, although writing the test was a bit tricky since PyGPGME isn't as helpful as it could be. (Simpler repository implementations that call gpg from the command line should probably just add the --digest-algo SHA512 option instead of imitating this; a rough example is at the end of this post.) After getting that in place, any change to a suite in a PPA will result in it being re-signed with SHA-512, which is good as far as it goes, but we also want to re-sign PPAs that haven't been modified. Launchpad hosts more than 50000 active PPAs, though, a significant percentage of which include packages for sufficiently recent Ubuntu releases that we'd want to re-sign them for this. We can't expect everyone to push new uploads, and we need to run this through at least some part of our usual publication machinery rather than just writing a hacky shell script to do the job (which would have no idea which keys to sign with, to start with); but forcing full reprocessing of all those PPAs would take a prohibitively long time, and at the moment we need to interrupt normal PPA publication to do this kind of work. I therefore had to spend some quality time working out how to make things go fast enough.

The first couple of changes (1, 2) were to add options to our publisher script to let us run just the one step we need in careful mode: that is, forcibly re-run the Release file processing step even if it thinks nothing has changed, and entirely disable the other steps such as generating Packages and Sources files. Then last week I finally got around to timing things on one of our staging systems so that we could estimate how long a full run would take. It was taking a little over two seconds per archive, which meant that if we were to re-sign all published PPAs then that would take more than 33 hours! Obviously this wasn't viable; even just re-signing xenial would be prohibitively slow.

The next question was where all that time was going. I thought perhaps that the actual signing might be slow for some reason, but it was taking about half a second per archive: not great, but not enough to account for most of the slowness. The main part of the delay was in fact when we committed the database transaction after processing each archive, but not in the actual PostgreSQL commit, rather in the ORM invalidate method called to prepare for a commit. Launchpad uses the excellent Storm for all of its database interactions. One property of this ORM (and possibly of others; I'll cheerfully admit to not having spent much time with other ORMs) is that it uses a WeakValueDictionary to keep track of the objects it's populated with database results. Before it commits a transaction, it iterates over all those alive objects to note that if they're used in future then information needs to be reloaded from the database first.
Usually this is a very good thing: it saves us from having to think too hard about data consistency at the application layer. But in this case, one of the things we did at the start of the publisher script was:
def getPPAs(self, distribution):
    """Find private package archives for the selected distribution."""
    if (self.isCareful(self.options.careful_publishing) or
            self.options.include_non_pending):
        return distribution.getAllPPAs()
    else:
        return distribution.getPendingPublicationPPAs()
def getTargetArchives(self, distribution):
    """Find the archive(s) selected by the script's options."""
    if self.options.partner:
        return [distribution.getArchiveByComponent('partner')]
    elif self.options.ppa:
        return filter(is_ppa_public, self.getPPAs(distribution))
    elif self.options.private_ppa:
        return filter(is_ppa_private, self.getPPAs(distribution))
    elif self.options.copy_archive:
        return self.getCopyArchives(distribution)
    else:
        return [distribution.main_archive]
That innocuous-looking filter means that we do all the public/private filtering of PPAs up-front and return a list of all the PPAs we intend to operate on. This means that all those objects are alive as far as Storm is concerned and need to be considered for invalidation on every commit, and the time required for that stacks up when many thousands of objects are involved: this is essentially accidentally quadratic behaviour, because all archives are considered when committing changes to each archive in turn. Normally this isn't too bad because only a few hundred PPAs need to be processed in any given run; but if we're running in a mode where we're processing all PPAs rather than just ones that are pending publication, then suddenly this balloons to the point where it takes a couple of seconds. The fix is very simple, using an iterator instead so that we don't need to keep all the objects alive:
from itertools import ifilter
def getTargetArchives(self, distribution):
    """Find the archive(s) selected by the script's options."""
    if self.options.partner:
        return [distribution.getArchiveByComponent('partner')]
    elif self.options.ppa:
        return ifilter(is_ppa_public, self.getPPAs(distribution))
    elif self.options.private_ppa:
        return ifilter(is_ppa_private, self.getPPAs(distribution))
    elif self.options.copy_archive:
        return self.getCopyArchives(distribution)
    else:
        return [distribution.main_archive]
After that, I turned to that half a second for signing. A good chunk of that was accounted for by the signContent method taking a fingerprint rather than a key, despite the fact that we normally already had the key in hand; this caused us to have to ask GPGME to reload the key, which requires two subprocess calls. Converting this to take a key rather than a fingerprint gets the per-archive time down to about a quarter of a second on our staging system, about eight times faster than where we started.

Using this, we've now re-signed all xenial Release files in PPAs using SHA-512 digests. On production, this took about 80 minutes to iterate over around 70000 archives, of which 1761 were modified. Most of the time appears to have been spent skipping over unmodified archives; even a few hundredths of a second per archive adds up quickly there. The remaining time comes out to around 0.4 seconds per modified archive. There's certainly still room for speeding this up a bit. We wouldn't want to do this procedure every day, but it's acceptable for occasional tasks like this. I expect that we'll similarly re-sign wily, vivid, and trusty Release files soon in the same way.
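For reference, the command-line route mentioned near the top for simpler repository implementations could look roughly like this; it is a sketch rather than what Launchpad runs, and the key ID is a placeholder:

# sign an apt Release file with SHA-512 digests: a clearsigned InRelease
# plus a detached, ASCII-armoured Release.gpg
gpg --default-key "$SIGNING_KEY_ID" --digest-algo SHA512 --clearsign -o InRelease Release
gpg --default-key "$SIGNING_KEY_ID" --digest-algo SHA512 --armor --detach-sign -o Release.gpg Release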

13 January 2016

Norbert Preining: Ian Buruma: Wages of Guilt

Since moving to Japan, I got more and more interested in history, especially the recent history of the 20th century. The book I just finished, Ian Buruma's (Wiki, home page) Wages of Guilt: Memories of War in Germany and Japan (Independent, NYRB), has been a revelation for me. As an Austrian living in Japan, I am experiencing the discrepancy between these two countries with respect to their treatment of war legacy practically daily, and many of my blog entries revolve around the topic of Japanese non-reconciliation.
Willy Brandt went down on his knees in the Warsaw ghetto, after a functioning democracy had been established in the Federal Republic of Germany, not before. But Japan, shielded from the evil world, has grown into an Oskar Matzerath: opportunistic, stunted, and haunted by demons, which it tries to ignore by burying them in the sand, like Oskar's drum.
Ian Buruma, Wages of Guilt, Clearing Up the Ruins
The comparison of Germany and Japan with respect to their recent history, as laid out in Buruma's book, throws a spotlight on various aspects of the psychology of the German and Japanese populations, while at the same time not falling into the easy trap of explaining everything with differences in guilt culture. A book of great depth and broad insights that everyone with even the slightest interest in these topics should read.
This difference between (West) German and Japanese textbooks is not just a matter of detail; it shows a gap in perception.
Ian Buruma, Wages of Guilt, Romance of the Ruins
Even thinking about giving a halfway complete account of this book is impossible for me. The sheer amount of information, on both the German and the Japanese side, is impressive. His incredible background (studies of Chinese literature and Japanese film!) and long years as a journalist and editor enrich the book with facets not normally available: in particular his knowledge of both German and Japanese film history, and the reflection of history in movies, were completely new aspects for me (see my recent post (in Japanese)).

The book is comprised of four parts: the first with the chapters War Against the West and Romance of the Ruins; the second with the chapters Auschwitz, Hiroshima, and Nanking; the third with History on Trial, Textbook Resistance, and Memorials, Museums, and Monuments; and the last part with A Normal Country, Two Normal Towns, and Clearing Up the Ruins. Let us look at the chapters in turn:

The book somehow left me with a bleak impression of Japanese post-war times as well as Japan's future. Having read other books about political ignorance in Japan (Norma Field's In the Realm of a Dying Emperor, or the Chibana history), Buruma's characterization of Japanese politics is striking. He couldn't foresee the recent changes in legislation pushed through by the Abe government, actually breaking the constitution, or the rewriting of history currently going on with respect to comfort women and Nanking. But reading his statement about Article Nine of the constitution and looking at the changes in political attitude, I am scared about where Japan is heading:
The Nanking Massacre, for leftists and many liberals too, is the main symbol of Japanese militarism, supported by the imperial (and imperialist) cult. Which is why it is a keystone of postwar pacifism. Article Nine of the constitution is necessary to avoid another Nanking Massacre. The nationalist right takes the opposite view. To restore the true identity of Japan, the emperor must be reinstated as a religious head of state, and Article Nine must be revised to make Japan a legitimate military power again. For this reason, the Nanking Massacre, or any other example of extreme Japanese aggression, has to be ignored, softened, or denied.
Ian Buruma, Wages of Guilt, Nanking
While there are signs of resistance in the streets of Japan (Okinawa and Henoko Bay, the demonstrations against the secrecy law and the revision of the constitution), we have yet to see a change driven by the people in a country ruled and dominated by oligarchs. I don't think there will be another Nanking Massacre in the near future, but Buruma's book shows that we are heading back to a nationalistic regime similar to pre-war times, just covered with a democratic veil to distract critics.
I close with several other quotes from the book that caught my attention: In the preface and introduction:
[...] mainstream conservatives made a deliberate attempt to distract people's attention from war and politics by concentrating on economic growth.
The curious thing was that much of what attracted the Japanese to Germany before the war (Prussian authoritarianism, romantic nationalism, pseudo-scientific racialism) had lingered in Japan while becoming distinctly unfashionable in Germany.
In Romance of the Ruins:
The point of all this is that Ikeda's promise of riches was the final stage of what came to be known as the "reverse course", the turn away from a leftist, pacifist, neutral Japan, a Japan that would never again be involved in any wars, that would resist any form of imperialism, that had, in short, turned its back for good on its bloody past. The "Double Your Incomes" policy was a deliberate ploy to draw public attention away from constitutional issues.
In Hiroshima:
The citizens of Hiroshima were indeed victims, primarily of their own military rulers. But when a local group of peace activists petitioned the city of Hiroshima in 1987 to incorporate the history of Japanese aggression into the Peace Memorial Museum, the request was turned down. The petition for an Aggressors Corner was prompted by junior high school students from Osaka, who had embarrassed Peace Museum officials by asking for an explanation about Japanese responsibility for the war.
The history of the war, or indeed any history, is indeed not what the Hiroshima spirit is about. This is why Auschwitz is the only comparison that is officially condoned. Anything else is too controversial, too much part of "the flow of history".
In Nanking, by the governmental pseudo-historian Tanaka:
Unlike in Europe or China, writes Tanaka, you won't find one instance of planned, systematic murder in the entire history of Japan. This is because the Japanese have a different sense of values from the Chinese or the Westerners.
In History on Trial:
In 1950, Becker wrote that few things have done more to hinder true historical self-knowledge in Germany than the war crimes trials. He stuck to this belief. Becker must be taken seriously, for he is not a right-wing apologist for the Nazi past, but an eminent liberal.
There never were any Japanese war crimes trials, nor is there a Japanese Ludwigsburg. This is partly because there was no exact equivalent of the Holocaust. Even though the behavior of Japanese troops was often barbarous, and the psychological consequences of State Shinto and emperor worship were frequently as hysterical as Nazism, Japanese atrocities were part of a military campaign, not a planned genocide of a people that included the country's own citizens. And besides, those aspects of the war that were most revolting and furthest removed from actual combat, such as the medical experiments on human guinea pigs (known as "logs") carried out by Unit 731 in Manchuria, were passed over during the Tokyo trial. The knowledge compiled by the doctors of Unit 731 (of freezing experiments, injection of deadly diseases, vivisections, among other things) was considered so valuable by the Americans in 1945 that the doctors responsible were allowed to go free in exchange for their data.
Some Japanese have suggested that they should have conducted their own war crimes trials. The historian Hata Ikuhiko thought the Japanese leaders should have been tried according to existing Japanese laws, either in military or in civil courts. The Japanese judges, he believed, might well have been more severe than the Allied tribunal in Tokyo. And the consequences would have been healthier. If found guilty, the spirits of the defendants would not have ended up being enshrined at Yasukuni. The Tokyo trial, he said, purified the crimes of the accused and turned them into martyrs. If they had been tried in domestic courts, there is a good chance the real criminals would have been flushed out.
After it was over, the Nippon Times pointed out the flaws of the trial, but added that the Japanese people must ponder over why it is that there has been such a discrepancy between what they thought and what the rest of the world accepted almost as common knowledge. This is at the root of the tragedy which Japan brought upon herself.
Emperor Hirohito was not Hitler; Hitler was no mere shrine. But the lethal consequences of the emperor-worshipping system of irresponsibilities did emerge during the Tokyo trial. The savagery of Japanese troops was legitimized, if not driven, by an ideology that did not include a Final Solution but was as racialist as Hitler's National Socialism. The Japanese were the Asian Herrenvolk, descended from the gods.
Emperor Hirohito, the shadowy figure who changed after the war from navy uniforms to gray suits, was not personally comparable to Hitler, but his psychological role was remarkably similar.
In fact, MacArthur behaved like a traditional Japanese strongman (and was admired for doing so by many Japanese), using the imperial symbol to enhance his own power. As a result, he hurt the chances of a working Japanese democracy and seriously distorted history. For to keep the emperor in place (he could at least have been made to resign), Hirohito's past had to be freed from any blemish; the symbol had to be, so to speak, cleansed from what had been done in its name.
In Memorials, Museums, and Monuments:
If one disregards, for a moment, the differences in style between Shinto and Christianity, the Yasukuni Shrine, with its relics, its sacred ground, its bronze paeans to noble sacrifice, is not so very different from many European memorials after World War I. By and large, World War II memorials in Europe and the United States (though not the Soviet Union) no longer glorify the sacrifice of the fallen soldier. The sacrificial cult and the romantic elevation of war to a higher spiritual plane no longer seemed appropriate after Auschwitz. The Christian knight, bearing the cross of king and country, was not resurrected. But in Japan, where the war was still truly a war (not a Holocaust), and the symbolism still redolent of religious exultation, such shrines as Yasukuni still carry the torch of nineteenth-century nationalism. Hence the image of the nation owing its restoration to the sacrifice of fallen soldiers.
In A Normal Country:
The mayor received a letter from a Shinto priest in which the priest pointed out that it was un-Japanese to demand any more moral responsibility from the emperor than he had already taken. Had the emperor not demonstrated his deep sorrow every year, on the anniversary of Japan's surrender? Besides, he wrote, it was wrong to have spoken about the emperor in such a manner, even as the entire nation was deeply worried about his health. Then he came to the main point: It is a common error among Christians and people with Western inclinations, including so-called intellectuals, to fail to grasp that Western societies and Japanese society are based on fundamentally different religious concepts . . . Forgetting this premise, they attempt to place a Western structure on a Japanese foundation. I think this kind of mistake explains the demand for the emperor to bear full responsibility.
In Two Normal Towns:
The bust of the man caught my attention, but not because it was in any way unusual; such busts of prominent local figures can be seen everywhere in Japan. This one, however, was particularly grandiose. Smiling across the yard, with a look of deep satisfaction over his many achievements, was Hatazawa Kyoichi. His various functions and titles were inscribed below his bust. He had been an important provincial bureaucrat, a pillar of the sumo wrestling establishment, a member of various Olympic committees, and the recipient of some of the highest honors in Japan. The song engraved on the smooth stone was composed in praise of his rich life. There was just one small gap in Hatazawa's life story as related on his monument: the years from 1941 to 1945 were missing. Yet he had not been idle then, for he was the man in charge of labor at the Hanaoka mines.
In Clearing Up the Ruins:
But the question in American minds was understandable: could one trust a nation whose official spokesmen still refused to admit that their country had been responsible for starting a war? In these Japanese evasions there was something of the petulant child, stamping its foot, shouting that it had done nothing wrong, because everybody did it.
Japan seems at times not so much a nation of twelve-year-olds, to repeat General MacArthur's phrase, as a nation of people longing to be twelve-year-olds, or even younger, to be at that golden age when everything was secure and responsibility and conformity were not yet required.
For General MacArthur was right: in 1945, the Japanese people were political children. Until then, they had been forced into a position of complete submission to a state run by authoritarian bureaucrats and military men, and to a religious cult whose high priest was also formally chief of the armed forces and supreme monarch of the empire.
I saw Jew Süss that same year, at a screening for students of the film academy in Berlin. This showing, too, was followed by a discussion. The students, mostly from western Germany, but some from the east, were in their early twenties. They were dressed in the international uniform of jeans, anoraks, and work shirts. The professor was a man in his forties, a 68er named Karsten Witte. He began the discussion by saying that he wanted the students to concentrate on the aesthetics of the film more than the story. To describe the propaganda, he said, would simply be banal: We all know the what, so let's talk about the how. I thought of my fellow students at the film school in Tokyo more than fifteen years before. How many of them knew the what of the Japanese war in Asia.

25 October 2015

Bálint Réczey: Kernel oops collector is back in Debian!

The Linux Kernel Oops website collects kernel errors from all over the world, helping kernel developers find issues occurring in the wild, but they cannot help if no one sends reports to them. The Kerneloops client used to be part of Debian releases but it was removed from the archive because it did not work with the new collector site. When I started observing oopses on my machine I first thought of submitting a bug against the linux package in the BTS, but looking at the numerous bugs already open I looked for a more automated solution which would also help others. Reviving the kerneloops package involved switching it to the new submission URL, fixing a few memory allocation bugs in C (this is the first package I found using Valgrind by default for running tests) and ensuring that upstream was still active. The last step took most of the time, but finally Anton Arapov kindly accepted my patches and everything was set for the new upload. The package is now available from unstable and if you feel like it (especially if you experience oopses) please give it a try and report any problems you find. I'm also happy to receive success stories about oopses fixed after discovering and collecting them with the client. :-)

30 September 2015

Ben Armstrong: Halifax Mainland Common: Early Fall, 2015

A friend and I regularly meet to chat over coffee and then usually finish up by walking the maintained trail in the Halifax Mainland Common Park, but today we decided to take a brief excursion onto the unmaintained trails criss-crossing the park. The last gasp of a faint summer and early signs of fall are evident everywhere. Some mushrooms are dried and cracked in a mosaic pattern. Ferns and other brush are browning amongst the various greens of late summer. A few late blueberries still cling to isolated bushes here and there. The riot of fall colours in this small clearing, dotted with cotton-grass, bursts into view as we round a corner, set against a backdrop of nearby buildings. The ferns here are vivid, like a slow-burning fire that will take the rest of fall to burn out. We appreciate one last splash of colour before we head back under the cover of woods to rejoin the maintained trail. So many times we've travelled our usual route on automatic. I'm happy today we left the more travelled trail to share in these glimpses of the changing of seasons in a wilderness preserved for our enjoyment, immediately at hand to a densely populated part of the city.

18 September 2015

Dimitri John Ledkov: Clear Containers for Docker* Engine

Today at work, I announced something that James Hunt, Ikey Doherty and I have been working on. We integrated Clear Containers technology with Docker* Engine to create Clear Containers for Docker* Engine.

After following the installation instructions, one can pull and run existing Docker* containers in the secure Clear Containers environment. This means that instead of namespaces, a fast virtual machine is started using the kvmtool hypervisor. This VM runs an optimised minimal Linux* kernel and the optimised Clear Linux* for Intel Architecture Project user-space, with the sole goal of executing the Docker* workload and then shutting down.

The net effect is almost indistinguishable from typical Docker* container usage:
$ docker run -ti ubuntu:vivid
root@d88a60502ed7:/# systemd-detect-virt
kvm
Except that, as you can see, it is running inside a KVM VM, and thus protected by Intel Virtualization Technology.
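Not from the original post, but if you want to verify up front that the host CPU advertises the hardware virtualization support this relies on, a quick probe is:

# a non-zero count means VT-x (vmx) or AMD-V (svm) is advertised by the CPU
grep -c -E 'vmx|svm' /proc/cpuinfo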

This is available on Clear Linux* as well as multiple other operating systems.

I hope this is exciting enough for people to try out, and if you have any feedback, feel free to leave comments or join our mailing list.

*Other names and brands may be claimed as the property of others

The postings on this site are my own and don't necessarily represent Intel's positions, strategies, or opinions.

8 June 2015

Timo Jyrinki: Quick Look: Dell XPS 13 Developer Edition (2015) with Ubuntu 14.04 LTS

I recently obtained the newest of Dell's Ubuntu developer offerings, the XPS 13 (2015, model 9343). I opted for the FullHD non-touch display, mostly because of better battery life, no actual need for a higher resolution, and the matte screen which is great outdoors. Touch would have been "nice-to-have", but in my work I don't really need it.

The other specifications include an i7-5600U CPU, 8GB RAM, a 256GB SSD [edit: lshw], and of course Ubuntu 14.04 LTS pre-installed as an OEM specific installation. It was not possible to order it directly from the Dell site, as Finland is reportedly not an online market for Dell... The wholesale company however managed to get two models on their lists, so it's now possible to order via retailers. [edit: here are some country specific direct web order links however US, DE, FR, SE, NL]

In this blog post I give a quick look at how I started using it, and make a few observations on the pre-installed Ubuntu. I personally was interested in using the pre-installed Ubuntu like a non-Debian/Ubuntu developer would use it, but Dell has also provided instructions for Ubuntu 15.04, Debian 7.0 and Debian 8.0 for advanced users, among others. Even if not using the pre-installed Ubuntu, the benefit of buying an Ubuntu laptop is obviously the smaller cost and, on the other hand, contributing to free software (by paying for the hardware enablement engineering done by or purchased by Dell).
Unboxing
The Black Box. (and white cat)

Opened box.






First time lid opened, no dust here yet!
First time boot up, transitioning from the boot logo to a first time Ubuntu video.
A small clip from the end of the welcoming video.
First time setup. Language, Dell EULA, connecting to WiFi, location, keyboard, user+password.
Creating recovery media. I opted not to do this as I had happened to read that it's highly recommended to install upgrades first, including to this tool.
Finalizing setup.
Ready to log in!
It's alive!
Not so recent 14.04 LTS image... lots of updates.
Problems in the First Batch

Unfortunately the first batch of XPS 13s with Ubuntu is going to ship with some problems. They're easy to fix if you know how, but it's sad that they're there to begin with in the factory image. It is not known when a fixed batch will start shipping - July maybe?

First of all, installing software upgrades stops. You need to run the following command via Dash Terminal once: sudo apt-get install -f (it suggests upgrading libc-dev-bin, libc6-dbg, libc6-dev and udev). After that you can continue running Software Updater as usual, maybe rebooting in between.

Secondly, the fixed touchpad driver is included but not enabled by default. You need to enable the only non-enabled Additional Driver, as seen in the picture below or as instructed on YouTube.

Dialog enabling the touchpad driver.

Clarification: you can safely ignore the two paragraphs below, they're just for advanced users like me who want to play with upgraded driver stacks.

Optionally, since I'm interested in the latest graphics drivers, especially in the case of brand new hardware like Intel Broadwell, I upgraded my Ubuntu to use the 14.04.2 Hardware Enablement stack (matches 14.10 hardware support): sudo apt install --install-recommends libgles2-mesa-lts-utopic libglapi-mesa-lts-utopic linux-generic-lts-utopic xserver-xorg-lts-utopic libgl1-mesa-dri-lts-utopic libegl1-mesa-drivers-lts-utopic libgl1-mesa-glx-lts-utopic:i386

Even though it's much better than a normal Ubuntu 14.10 would be since many of the Dell fixes continue to be in use, some functionality might become worse compared to the pre-installed stack. The only thing I have noticed though is the internal microphone not working anymore out-of-the-box, requiring a kernel patch as mentioned in Dell's notes. This is not a surprise since the real eventual upstream support involves switching from HDA to I2S and during 14.10 kernel work that was not nearly done. If you're excited about new drivers, I'd recommend waiting until August when the 15.04 based 14.04.3 stack is available (same package names, but 'vivid' instead of 'utopic'). [edit: I couldn't resist myself when I saw linux-generic-lts-vivid (3.19 kernel) is already in the archives. 14.04.2 + that gives me working microphone again!]
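Applying the substitution described above ('vivid' instead of 'utopic') to the earlier command gives something like the following for the 15.04-based 14.04.3 stack; this is derived mechanically from the text rather than tested, so verify the package names exist in the archive first:

sudo apt install --install-recommends libgles2-mesa-lts-vivid libglapi-mesa-lts-vivid linux-generic-lts-vivid xserver-xorg-lts-vivid libgl1-mesa-dri-lts-vivid libegl1-mesa-drivers-lts-vivid libgl1-mesa-glx-lts-vivid:i386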
Conclusion

Dell XPS 13 Developer Edition with Ubuntu 14.04 LTS is an extremely capable laptop + OS combination nearing perfection, but not quite there because of the software problems in the launch pre-install image. The laptop looks great, feels like a quality product should, and is very compact for the screen size.

I've moved over all my work onto it and everything so far is working smoothly in my day-to-day tasks. I'm staying on Ubuntu 14.04 LTS and using my previous LXC configuration to run the latest Ubuntu and Debian development versions. I've also made some interesting changes already, like LUKS In-Place Conversion, converting the pre-installed Ubuntu into a whole-disk-encrypted one (not recommended for the faint-hearted, GRUB reconfiguration is a bit of a pain).

I look happily forward to working a few productive years with this one!

1 June 2015

Junichi Uekawa: Been writing photo viewer in nodejs.

Been writing a photo viewer in nodejs. I have a raspberrypi with a 3TB HDD to archive camera photo data. It's not easy to retrieve data from it. I tried running geeqie via ssh and it was slow waiting for it to repaint the display. shotwell seems to be too heavyweight. I tried running sshfs and running geeqie but it was very slow waiting for IO to complete. I tried implementing a simple web service using nodejs that retrieves a day of photos, and it just crashed the browser due to too much data, and the download size was huge. I tried thumbnailing and added on-demand loading, and now the browser was surviving better but the raspberrypi was spending seconds converting photos to thumbnails. I haven't solved my itch yet.

Russ Allbery: Review: Hild

Review: Hild, by Nicola Griffith
Series: Hild #1
Publisher: Farrar, Straus and Giroux
Copyright: 2013
ISBN: 0-374-28087-8
Format: Hardcover
Pages: 539
Hild was born in seventh-century Britain, daughter of Hereric (would-be king of Deira) and Breguswith. Born, her mother said, to be a light to the land. This much is documented by the Venerable Bede, in The Ecclesiastical History of the English Nation, along with Hild's later rise to become one of the most powerful abbesses in British history. But nearly all of Hild's early life is a cipher. Hild fills in some of that gap with fiction. Specifically, it takes Hild from a child of three, learning her father has been killed, to a young woman, an advisor of Edwin king. It's a coming-of-age story in part, following her maturation both physically and mentally, her training in when to speak and how, and the dangers of being close to royalty in a fractious, political, and war-torn land. It's also the story of endless maneuvering and care, initially by her mother Breguswith and then by Hild herself, sometimes in opposition to her mother, sometimes in alliance, as her mother attempts to make a safe place for her daughter and herself in a treacherous court of shifting dangers. But Hild is also a story about Britain. It's a novel about how it felt and how it sounded. How it was organized, primarily among the high-born but with snippets of perspective from the lower classes. And it's a story about women: about weaving, about medicine, about friendships and partnerships and alliances among women, about the politics of marriage and childbirth, but also about the places women held and made in a time when surviving official history is all about the men. Hild is a painstakingly-researched, sprawling, lush, and sensual immersion in a part of history that gets little formal attention: after the Romans, before what we think of as medieval, before England as a country. I've been waiting for a new Nicola Griffith novel for quite a while, since Always in 2007. Hild doesn't disappoint, but it's also different than Griffith's previous writing. It has less of the strong narrative drive and clarity of either her SF or Aud Torvingen stories. Instead, the goal of Hild is to immerse and transport you, to help you feel the shape of Hild's world, to understand her days and tasks, her dreams and dangers. Like all of Griffith's work, it's beautifully written. Griffith puts description at the fore, in the sharp eyes of an observant girl who loves the outdoors, and who has been taught to watch for signs of weather and tools of healing. From early on, Hild's mother sets her up as a seer and prophetess as a way of establishing her value to war leaders and kings, and while some of it is drama and cryptic words, so much of it is careful observation, networks of information-gathering, and sharp deduction about the motives and politics of surrounding kings. Hild is very good at what she does because she has a sharp, quick mind that has been carefully trained, and because she has the aid of her mother's networks and then aid in building her own. Griffith does a wonderful job showing the reader what Hild sees, how she appreciates the world both for its own beauty and for the information she can gather from it, and how to build influence by navigating tense and dangerous moments: waiting for just the right moment and the right word, and taking sudden, impulsive risks and accepting their consequences. Unfortunately for me, this setting is also rich in complex politics and numerous actors, with older and unfamiliar names, and I got lost. Constantly. That's the drawback to immersion: Griffith doesn't hold the reader's hand.
We get Hild's thoughts and analysis, and the reader has to keep up. Sometimes I did; sometimes I didn't. There's a dizzying flurry of names here, both personal and place, and while there is a map and a single family tree, neither helped me as much as I wanted them to. At several points, I found myself skimming through the latest shift in the balance between various petty kings because, while I knew I'd seen all the names before, they had come adrift from their context in the story. That was my major frustration with this book. It was all interesting enough that I would have kept thumbing back to a detailed dramatis personae, and indeed I kept checking the family tree, but there just wasn't enough detail there. Even better would have been a brief factual history of the political and military conflicts Hild was living through, keyed by chapter. Hild is startlingly intelligent, leaping from insight to insight, which is wonderful for building character, but which occasionally leaves the reader scrambling to catch up with the connections between her thoughts. I felt like, had I the broader context, I could have understood her insight more readily. All this information is likely available, since Griffith is playing off of documented history, but I'm not the sort of reader who likes doing Internet research while engrossed in a book.

So, that's the downside, at least for me. But this book has many strengths, even if you're lost much of the time. Hild as written by Griffith is a fascinating character, full of sharp edges and difficult moods and a powerful belief in what she feels is right. Griffith is at the height of her writing ability when describing Hild making hard choices and taking on burdens that seem too large for her to bear. There are two sections of the book, where Hild is forced by circumstances to lead men in violence, that I think are two of the best bits of writing Griffith has ever published, not just because of those scenes themselves, but because of the aftermath, the lingering echoes, the way that they shape and inform everything Hild does afterwards. The mingling of reward and loss, maturation and trauma, the sense that the world has shifted both inside and out and it's nearly impossible to say whether the change is for the better or worse.

Griffith also knows when not to say too much, and while I found that frustrating for the politics, it does wonders for the characters. Hild's complex and fraught relationship with Gwaldus is the best example. We never know exactly what Gwaldus is thinking; Hild can only guess, and at times one is fairly sure that she guesses wrong. But that doesn't lead to sudden revelations, where the characters finally understand each other. Instead, they both adjust, they maneuver around each other, they find space and understanding where they can, and sometimes they just close off. This book is full of relationships like this: loves that are too complicated for words, bonds that are too dangerous to acknowledge, and characters who can't relax even though they wish the best for each other. At times, it's exhausting reading, but it gives Hild a tension that one wouldn't expect from a sprawling novel full of description and scene-building.

Hild is clearly the first book of a series, and leaves quite a lot unresolved. If you want closure in relationships and in politics, there's a lot here that you may find frustrating. And if, like me, you struggle to keep names and politics straight, you're probably going to get lost.
But it's well worth the effort for the description, for Hild's thought processes, and for a few haunting scenes that I will be replaying in my head for a very long time. Expect to take your time with this, and wait until you're in the mood for immersion and puzzling out context as you go, but recommended. I suspect it would be even better on a second reading. Rating: 8 out of 10

14 March 2015

Dimitri John Ledkov: Intel CPU microcode support in ubuntu-drivers-common

Ubuntu Vivid Vervet 15.04 is on its final approach to release at the end of next month. Here is a highlight of one of the features that I have helped to land.

ubuntu-drivers-common is a framework to detect hardware-dependent components on a user's machine and offer to install additional packages that enable better support for that hardware. Typical examples are drivers for graphics cards. This cycle I have added a CPU family detection plugin, which detects the CPU family and offers the appropriate microcode update; e.g. if one is running an Intel CPU, the intel-microcode package is installed.

Check out:
$ ubuntu-drivers devices
$ ubuntu-drivers list
$ ubuntu-drivers autoinstall
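
For illustration only, here is a minimal sketch of what such a CPU detection plugin could look like, assuming ubuntu-drivers-common's custom detection plugin interface (a Python file dropped into /usr/share/ubuntu-drivers-common/detect/ that exposes a detect(apt_cache) function returning package names to offer). This is not the actual plugin that landed in Vivid, just an illustration:

# Hypothetical custom detection plugin for ubuntu-drivers-common.
# It maps the CPU vendor found in /proc/cpuinfo to a microcode package.

def detect(apt_cache):
    """Suggest a microcode package based on the CPU vendor in /proc/cpuinfo."""
    vendors = {
        'GenuineIntel': 'intel-microcode',
        'AuthenticAMD': 'amd64-microcode',
    }
    try:
        with open('/proc/cpuinfo') as f:
            cpuinfo = f.read()
    except IOError:
        return []
    for vendor, package in vendors.items():
        # Only offer the package if it actually exists in the apt cache.
        if vendor in cpuinfo and package in apt_cache:
            return [package]
    return []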

15 February 2015

Antonio Terceiro: rmail: reviving upstream maintenance

It is always fun to write new stuff, and to be able to show off that shiny new piece of code that just came out of your brilliance and/or restless effort. But the world does not spin on shiny things alone; for free software to continue making the world work, we also need the dusty, and maybe a little rusty, things that keep our systems together. Someone needs to make sure the rust does not take over, and that these venerable but useful pieces of code keep it together as the ecosystem around them evolves. As you know, Someone is probably the busiest person there is, so often you will have to take Someone's job for yourself.

rmail is a Ruby library able to parse, modify, and generate MIME mail messages. While handling transitions of Ruby interpreters in Debian, it was one of the packages we always had to fix for new Ruby versions, to the point where the Debian package had accumulated quite a few patches. The situation became ridiculous. Dropping it from the Debian archive was considered, but dropping it would mean either also dropping feed2imap and sup, or porting both to another mail library. Since doing this type of port is always painful, I decided instead to do something about the sorry state rmail was in on the upstream side.

The reasons why it was not properly maintained upstream do not matter: people lose interest, move on to other projects, are not active users anymore; that is normal in free software projects, and instead of blaming upstream maintainers in any way we need to thank them for writing us free software in the first place, and step up to fix the stuff we use. I got in touch with the people listed as owners for the package on rubygems.org, and got owner permission, which means I can now publish new versions myself. With that, I cloned the repository where the original author had imported the latest code uploaded to rubygems and had started to receive contributions, but that repository had been inactive for more than one year. It had already received some contributions from the sup developers which never made it into a new rmail release, so the sup people started using their own fork called rmail-sup.

In my repository, I have imported all the patches that still made sense from the Debian repository, did a bunch of updates, mainly to modernize the build system, and made a 1.1.0 release to rubygems.org. This release is pretty much compatible with 1.0.0, but since I did not test it with Ruby versions older than the one on my work laptop (2.1.5), I bumped the minor version number as a warning to prospective users still on older Ruby versions. In this release, the test suite passes 100% clean, which always gives my mind a lot of comfort:

$ rake
/usr/bin/ruby2.1 -I"lib:." -I"/usr/lib/ruby/vendor_ruby" "/usr/lib/ruby/vendor_ruby/rake/rake_test_loader.rb" "test/test*.rb"
Loaded suite /usr/lib/ruby/vendor_ruby/rake/rake_test_loader
Started
...............................................................................
...............................................................................
........
Finished in 2.096916712 seconds.
166 tests, 24213 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications
100% passed
79.16 tests/s, 11546.95 assertions/s

And in the new release I have just uploaded to the Debian experimental suite (1.1.0-1), I was able to drop all of the patches and just use the upstream source as is. So that's it: if you use rmail for anything, consider testing version 1.1.0-1 from Debian experimental, or 1.1.0 from rubygems.org if you're into that, and report any bugs to the [github repository](https://github.com/terceiro/rmail). My only commitment for now is to keep it working, but if you want to add new features I will definitely review and merge them.

21 January 2015

Dimitri John Ledkov: Python 3 ports of launchpadlib & ubuntu-dev-tools (library) are available

I'm happy to announce that Python 3 ports of launchpadlib & ubuntu-dev-tools (library) are available for consumption.

These are versions 1.10.3 & 0.155, respectively.

This means that everyone should start porting their reports, tools, and scriptage to python3.
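
For example, here is a minimal sketch of a launchpadlib script that already runs under Python 3 (anonymous login against production; the consumer name is arbitrary):

#!/usr/bin/python3
# Minimal launchpadlib smoke test under Python 3: anonymous login and a simple query.
from launchpadlib.launchpad import Launchpad

# 'lp-py3-smoke-test' is just an arbitrary consumer name for this example.
lp = Launchpad.login_anonymously('lp-py3-smoke-test', 'production', version='devel')
ubuntu = lp.distributions['ubuntu']
print(ubuntu.display_name, ubuntu.current_series.name)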

ubuntu-dev-tools has the library portion ported to python3, as I did not dare to switch individual scripts to python3 without thorough interactive testing. Please help out porting those and/or file bug reports against the python3 port. Feel free to subscribe me to the bug reports on launchpad.

For the time being, I believe some things will not be easy to port to python3 because of the elephant in the room: bzrlib. For some things like lp-shell, it should be easy to move away from bzrlib, as only non-VCS functionality is used there. For other things the current suggestion is to fork out to the bzr binary or to a python2 process. I wonder whether a minimal usable python3-bzrlib wrapper around the python2 bzrlib could satisfy the needs of basic and common scripts.
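
As a rough sketch of the fork-to-the-bzr-binary approach from Python 3 (the branch URL below is just an example):

#!/usr/bin/python3
# Rough sketch: avoid bzrlib from Python 3 by shelling out to the bzr binary.
import subprocess

def bzr_revno(branch_url):
    """Return the latest revision number of a branch by calling the bzr CLI."""
    out = subprocess.check_output(['bzr', 'revno', branch_url])
    return out.decode('utf-8').strip()

if __name__ == '__main__':
    # Any lp: or local branch works here, provided bzr is installed.
    print(bzr_revno('lp:ubuntu-dev-tools'))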

On a side note, launchpadlib & lazr.restfulclient now have out-of-the-box proxy support enabled. This makes things like add-apt-repository work on networks with such a setup. I think a few people will be happy about that.

All of these goodies are available in Ubuntu 15.04 (Vivid Vervet) or Debian Experimental (and/or NEW queue).

2 December 2014

Matthias Klumpp: How to build a cross-distro package with Limba

Disclaimer: Limba is still in a very early stage of development. Bugs happen, and I give no guarantees on API stability yet.

Limba is a very simple cross-distro package installer, utilizing OverlayFS found in recent Linux kernels (>= 3.18). As an example I created a small Limba package for one of the Qt5 demo applications, and I would like to share the process of creating Limba packages; it's quite simple, and I could use some feedback on how well the resulting packages work on multiple distributions. I assume that you have compiled Limba and installed it; how that is done is described in its README file. So, let's start.

1. Prepare your application

The cool thing about Limba is that you don't really have to make many changes to your application. There are a few things to pay attention to, though: this needs to be done so your application will find its data at runtime. Additionally, you need to write an AppStream metadata file, and find out which components your application depends on.

2. Create package metadata & install software

2.1 Basics

Now you can create the metadata necessary to build a Limba package. Just run
cd /path/to/my/project
lipkgen make-template
This will create a pkginstall directory, containing a control file and a metainfo.xml file, which can be a symlink to the AppStream metadata, or be new metadata. Now, configure your application with /opt/bundle as install prefix (-DCMAKE_INSTALL_PREFIX=/opt/bundle, --prefix=/opt/bundle, etc.) and install it to the pkginstall/inst_target directory.

2.2 Handling dependencies

If your software has dependencies on other packages, just get the Limba packages for these dependencies, or build new ones. Then place the resulting IPK packages in the pkginstall/repo directory. Ideally, you should be able to fetch Limba packages which contain the software components directly from their upstream developers. Then, open the pkginstall/control file and adjust the Requires line. The names of the components you depend on match their AppStream IDs (the <id/> tag in the AppStream XML document). Any version relation (>=, >>, <<, <=, <>) is supported and is specified in brackets after the component ID. The resulting control file might look like this:
Format-Version: 1.0

Requires: Qt5Core (>= 5.3), Qt5DBus (>= 5.3), libpng12
If the specified dependencies are in the repo/ subdirectory, these packages will get installed automatically when your application package is installed. Otherwise, Limba relies on the user to install these packages manually; there is no interaction with the distribution's package manager (yet?).

3. Building the package

In order to build your package, make sure the content in inst_target/ is up to date, then run
lipkgen build pkginstall/
This will build your package and output it in the pkginstall/ directory.

4. Testing the package

You can now test your package. Just run
sudo lipa install package.ipk
Your software should install successfully. If you provided a .desktop file in $prefix/share/applications, you should find your application in your desktop's application menu. Otherwise, you can run a binary from the command line; just append the version of your package to the binary name (bash-completion helps). Alternatively, you can use the runapp command, which lets you run any binary in your bundle/package, and which is quite helpful for debugging (since the environment a Limba-installed application runs in is different from that of other applications). Example:
runapp $component_id-$version:/bin/binary-name
And that's it! :-) I used these steps to create a Limba package for the OpenGL Qt5 demo on Tanglu 2 (Bartholomea), and tested it on Kubuntu 15.04 (Vivid) with KDE, as well as on an up-to-date Fedora 21, with GNOME and without any Qt or KDE stuff installed. I encountered a few obstacles when building the packages, e.g. Qt5 initially didn't find the right QPA plugin; that has been fixed by adjusting a config file in the Qt5Gui package. Also, on Fedora, a matching libpng was missing, so I included that as well. You can find the packages at Github currently (but I am planning to move them to a different place soon). The biggest issue with Limba at the moment is that it needs Linux 3.18, or an older kernel with OverlayFS support compiled in. Apart from that and a few bugs, the experience is quite smooth. As soon as I am sure there are no hidden fundamental issues, I can think about implementing more features, like signing packages and automatically updating them. Have fun playing around with Limba!

30 October 2014

Alessio Treglia: Handling identities in distributed Linux cloud instances

I have many distributed Linux instances across several clouds, be they global, such as Amazon or Digital Ocean, or regional clouds such as TeutoStack or Enter. Probably many of you are facing the same issue: having a consistent UNIX identity across multiple instances. While in an ideal world LDAP would be a perfect choice, leaving LDAP open to the wild Internet is not a great idea. So, how to solve this issue while staying secure? The trick is to use the new NSS module for SecurePass. While SecurePass has traditionally been used in the operating system just as two-factor authentication, the new beta release is capable of holding "extended attributes", i.e. arbitrary information for each user profile. We will use SecurePass to authenticate users and store Unix information with this new capability. In detail, we will set extended attributes for each user profile, install and configure the SecurePass NSS module, and configure PAM to authenticate against SecurePass over RADIUS.

SecurePass and extended attributes

The next generation of SecurePass (currently in beta) is capable of storing arbitrary data for each profile. This is called Extended Attributes (or xattrs) and, as you can imagine, is organized as key/value pairs. You will need the SecurePass tools to be able to modify users' extended attributes. The new releases of Debian Jessie and Ubuntu Vivid Vervet have a package for it, just:
# apt-get install securepass-tools
ERRATA CORRIGE: securepass-tools hasn't been uploaded to Debian yet; Alessio is working hard to make the package available in time for Jessie, though.
For other distributions or previous releases, there's a Python package (PIP) available. Make sure that you have pycurl installed and then:
# pip install securepass-tools
While the SecurePass tools allow a local configuration file, for this tutorial we highly recommend creating a global /etc/securepass.conf, so that it will also be usable by the NSS module. The configuration file looks like:
[default]
app_id = xxxxx
app_secret = xxxx
endpoint = https://beta.secure-pass.net/
Where app_id and app_secret are valid API keys to access SecurePass beta. Through the command line, we will be able to set the UID, GID and all the required Unix attributes for each user:
# sp-user-xattrs user@domain.net set posixuid 1000
While posixuid is the bare minimum attribute needed for a Unix login, several other attributes can be set as well.

Install and Configure NSS SecurePass

In a similar way to the tools, Debian Jessie and Ubuntu Vivid Vervet have a native package for SecurePass:
# apt-get install libnss-securepass
Previous releases of Debian and Ubuntu can still run the NSS module, as can CentOS and RHEL. Download the sources from: https://github.com/garlsecurity/nss_securepass Then:
./configure
make
make install (Debian/Ubuntu Only)
For CentOS/RHEL/Fedora you will need to copy files in the right place:
/usr/bin/install -c -o root -g root libnss_sp.so.2 /usr/lib64/libnss_sp.so.2
ln -sf libnss_sp.so.2 /usr/lib64/libnss_sp.so
The /etc/securepass.conf configuration file should be extended to hold defaults for NSS by creating an [nss] section as follows:
[nss]
realm = company.net
default_gid = 100
default_home = "/home"
default_shell = "/bin/bash"
This provides defaults in case attributes other than posixuid are not set for a user. We need to configure the Name Service Switch (NSS) to use SecurePass. We will change /etc/nsswitch.conf by adding sp to the passwd entry as follows:
$ grep sp /etc/nsswitch.conf
 passwd:  files sp
Double check that NSS is picking up our new SecurePass configuration by querying the passwd entries as follows:
$ getent passwd user
 user:x:1000:100:My User:/home/user:/bin/bash
$ id user
 uid=1000(user)  gid=100(users) groups=100(users)
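Since this resolution happens through NSS, any program using the C library sees the same data; for example, a quick sanity check from Python (the username below is just a placeholder):

#!/usr/bin/env python
# Quick check that NSS (and therefore the SecurePass 'sp' source) resolves a user.
import pwd

entry = pwd.getpwnam('user')  # 'user' is a placeholder account name
print(entry.pw_uid, entry.pw_gid, entry.pw_dir, entry.pw_shell)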
Using this setup by itself wouldn't allow users to log in to a system, because the password is missing. We will use SecurePass authentication to access the remote machine.

Configure PAM for SecurePass

On Debian/Ubuntu, install the RADIUS PAM module with:
# apt-get install libpam-radius-auth
If you are using CentOS or RHEL, you need to have the EPEL repository configured. In order to activate EPEL, follow the instructions on http://fedoraproject.org/wiki/EPEL Be aware that this has not been tested with SELinux enabled (set it to off or permissive). On CentOS/RHEL, install the RADIUS PAM module with:
# yum -y install pam_radius
Note: as of the time of writing, EPEL 7 is still in beta and does not contain the RADIUS PAM module. A request has been filed through Red Hat's Bugzilla to include this package in EPEL 7 as well.

Configure SecurePass with your RADIUS device. We only need to set the public IP address of the server, a fully qualified domain name (FQDN), and the secret password for the RADIUS authentication. In case the server is behind NAT, specify the public IP address that will be translated into it. After completion we get a small recap of the newly created device. For the sake of example, we use secret as our secret password. Configure the RADIUS PAM module accordingly, i.e. open /etc/pam_radius.conf and add the following lines:
radius1.secure-pass.net secret 3
radius2.secure-pass.net secret 3
Of course the secret is the same one we set up on the SecurePass administration interface. Beyond this point we need to configure PAM to correctly manage the authentication. In CentOS, open the configuration file /etc/pam.d/password-auth-ac; in Debian/Ubuntu open /etc/pam.d/common-auth, and make sure that pam_radius_auth.so is in the list:
auth required   pam_env.so
auth sufficient pam_radius_auth.so try_first_pass
auth sufficient pam_unix.so nullok try_first_pass
auth requisite  pam_succeed_if.so uid >= 500 quiet
auth required   pam_deny.so
Conclusions

Handling many distributed Linux instances poses several challenges, from software updates to identity management and central logging. In a cloud scenario, it is not always applicable to use traditional enterprise solutions, but new tools might become very handy. To freely subscribe to the SecurePass beta, join SecurePass at: http://www.secure-pass.net/open
And then send an e-mail to info@garl.ch requesting beta access.

6 October 2014

Julian Andres Klode: A weekend with the Acer Chromebook 13 FHD (AKA nyan-big)

I spent the weekend using almost exclusively my Chromebook 13, on a single charge Saturday and Sunday.

Keyboard

I think I like the keyboard better now than I used to when I first tried it. It gets nowhere near the ThinkPad X230 one, though; apart from the coating, which my (backlit) X230 unfortunately does not have.

Screen

While the screen appeared very grainy to me on first sight, having only used IPS screens in the past year, I got used to it over the weekend. I now do not notice much graininess anymore. The contrast still seems extremely poor, the colors are not vivid, and the vertical viewing angles are still a disaster, though.

Battery life

I think the battery life is awesome. I have 30% remaining now while I am writing this blog post and Chrome OS tells me I still have 3 hours and 19 minutes remaining. It could probably still be improved, though; I notice that Chrome OS uses 7-14% CPU in idle normally (and up to 20% in exceptional cases). The maximum power usage I measured using the battery's internal sensor was about 9.2W; that was with 5 Big Buck Bunny 1080p videos played in parallel. Average power consumption is around 3-5W (up to 6.5W with a single video playing), depending on brightness and use.

Performance

While I do notice a performance difference to my much more high-end Ivy Bridge Core i5 laptop, it turns out to be usable enough to not make me want to throw it at a wall. Things take a bit longer than I am used to, but it is still acceptable.

Input: Software Part

The user interface is great. There are a lot of gestures available for navigating between windows, tabs, and in the history. For example, horizontally swiping with two fingers moves in history, three fingers moves between tabs; and swiping down (or up for Australian scrolling) gives an overview of all windows (like Exposé on Mac, GNOME's activities, or the multi-tasking thing Maemo used to have). What I miss is a keyboard shortcut like Meta + Left/Right on GNOME which moves the active window to the left/right side of the screen. That would be very useful for multi-tasking situations.

Issues

I noticed some performance issues. For example, I can easily get the Chromebook to use 85% of a CPU by scrolling on a page with the touchpad, or 70% for scrolling by keeping a key pressed (crbug.com/420452). While watching Big Buck Bunny on YouTube, I noticed some (micro) stuttering in the beginning of the film, as well as each time I move in or out of the video area when not in full-screen mode (crbug.com/420582). It also increases CPU usage to about 70%.

Running a proper Linux?

Today, I tried to play around a bit with Debian wheezy and Ubuntu trusty systems, in a chroot for now. I was trying to find out if I can get an accelerated X server with the standard ChromeOS kernel. The short answer is: No. I tried two things:
  1. Debian wheezy with the binaries from ChromeOS (they have the same xserver version)
  2. Ubuntu trusty with the Nvidia drivers
Unfortunately, they did not work. Option 1 failed because ChromeOS uses glibc 2.15 whereas wheezy uses 2.13. Option 2 failed because the sysfs interface is different between the ChromeOS and Linux4Tegra kernels. I guess I'll have to wait. I also tried booting a custom kernel from USB, but given that the u-boot always sets console= and there is no non-verified u-boot available yet, I could not see any output on the screen :( Maybe I should build a u-boot myself?
Filed under: Chromebook

